The controller area network (CAN) remains the de facto standard for intra-vehicular communication. CAN enables reliable communication among various microcontrollers and vehicle devices without a central computer, which is essential for sustainable transportation systems. However, the nature of its communication poses serious security threats. According to a report from Upstream Security cited by caranddriver.com, there were at least 150 automotive cybersecurity incidents in 2019, part of a 94% year-over-year increase since 2016. To safeguard vehicles from such attacks, CAN, the most relied-upon in-vehicle network (IVN), must be secured through protocol modifications. In this paper, we develop a configurable CAN communication protocol with a hardware prototype for rapidly prototyping attacks, intrusion detection systems, and response systems. We use a field programmable gate array (FPGA) to prototype CAN and improve reconfigurability. This project focuses on attack detection and response in the case of bus-off attacks. The paper introduces two main modules: the multiple generic errors with error state machine (MGEESM) module and the bus-off attack detection (BOAD) module for a frame size of 111 bits (BOAD111), both based on the CAN protocol's handling of form errors, CRC errors, and bit errors. Our results show that, in the scenario with a transmit error counter (TEC) value of 127 for switching between the error-passive and bus-off states, the detection times for form, CRC, and bit errors introduced in the MGEESM module are 3.610 ms, 3.550 ms, and 3.280 ms, respectively, with errors introduced in consecutive frames. The detection time for the BOAD111 module in the same scenario is 3.247 ms.
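A bus-off attack exploits CAN's standard fault-confinement rules, which the error state machine above tracks. A minimal sketch of those rules, following the ISO 11898-1 counter thresholds; the class name and structure are illustrative, not the paper's FPGA implementation:

```python
class CanErrorStateMachine:
    """Simplified CAN fault-confinement model (per ISO 11898-1).

    Tracks the transmit error counter (TEC) and moves a node through
    error-active -> error-passive (TEC > 127) -> bus-off (TEC > 255).
    """

    def __init__(self):
        self.tec = 0

    @property
    def state(self):
        if self.tec > 255:
            return "bus-off"
        if self.tec > 127:
            return "error-passive"
        return "error-active"

    def on_transmit_error(self):
        # A detected transmit error (bit, CRC, or form error) adds 8 to TEC.
        self.tec += 8

    def on_successful_transmit(self):
        # Each successful transmission decrements TEC, floored at 0.
        self.tec = max(0, self.tec - 1)
```

Injecting errors in consecutive frames, as in the MGEESM experiments, drives TEC up by 8 per frame with no intervening decrements: 16 consecutive transmit errors push a node into error-passive, and 32 push it to bus-off.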
Organizations managing high-performance computing systems face a multitude of challenges, including overall energy consumption, microprocessor clock frequency limitations, and the escalating costs of chip production. Processor speeds have plateaued over the last decade, persisting in the range of 2 GHz to 5 GHz. Brain-inspired computing holds substantial promise for mitigating these challenges; the spiking neural network (SNN) in particular stands out for its power efficiency compared with conventional design paradigms. Nevertheless, several pivotal challenges impede the implementation of large-scale neural networks (NNs) on silicon: the absence of automated tools, the need for multifaceted domain expertise, and the inadequacy of existing algorithms to efficiently partition and place extensive SNN computations onto hardware. In this paper, we present an automated tool flow capable of converting any NN into an SNN. This involves a novel graph-partitioning algorithm designed to place SNNs on a network-on-chip (NoC), paving the way for future energy-efficient and high-performance computing paradigms. The presented methodology demonstrates its effectiveness by transforming artificial neural network (ANN) architectures into SNNs with a marginal average error penalty of merely 2.65%. The proposed graph-partitioning algorithm enables a 14.22% decrease in inter-synaptic communication and an 87.58% reduction in intra-synaptic communication, on average, underscoring its effectiveness in optimizing NN communication pathways. Compared to a baseline graph-partitioning algorithm, the proposed approach exhibits an average decrease of 79.74% in latency and a 14.67% reduction in energy consumption. Using existing NoC tools, the energy-latency product of SNN architectures is, on average, 82.71% lower than that of the baseline architectures.
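To make the partitioning step concrete, here is a sketch of a greedy placement of a synapse-weighted neuron graph onto NoC tiles, where neurons are co-located with the neighbors they communicate with most. The function, its inputs, and the heuristic are our illustration of the general technique, not the paper's algorithm:

```python
def greedy_partition(edges, num_parts, capacity):
    """Greedily place neurons onto NoC tiles to reduce cross-tile traffic.

    edges: dict mapping neuron -> {neighbor: synaptic weight}.
    Neurons are placed heaviest-total-weight first; each goes to the tile
    holding the most already-placed synaptic weight, subject to a per-tile
    capacity (assumed large enough to hold every neuron).
    """
    assignment = {}
    load = [0] * num_parts
    # Place high-traffic neurons first so their neighbors can follow them.
    order = sorted(edges, key=lambda n: -sum(edges[n].values()))
    for n in order:
        best, best_gain = None, -1
        for t in range(num_parts):
            if load[t] >= capacity:
                continue  # tile is full
            # Weight of n's synapses to neurons already on tile t.
            gain = sum(w for nb, w in edges[n].items()
                       if assignment.get(nb) == t)
            if gain > best_gain:
                best, best_gain = t, gain
        assignment[n] = best
        load[best] += 1
    return assignment
```

On a toy graph of two tightly coupled neuron pairs and two tiles, each pair lands on its own tile, so no synaptic traffic crosses the NoC.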
Surface acoustic wave (SAW) sensors with increasingly unique and refined design patterns are often developed using lithographic fabrication processes. Emerging applications of SAW sensors often require novel materials, which may present uncharted fabrication outcomes. The fidelity of SAW sensor performance correlates with the ability to limit post-fabrication defects, so effective means of detecting defects within the sensor are critical. However, labor-intensive manual labeling is often required, because precise identification and classification of surface features are needed for confidence in model accuracy. One approach to automating defect detection is to apply machine learning techniques to analyze and quantify defects within the SAW sensor. In this paper, we propose a machine learning approach that uses a deep convolutional autoencoder to segment surface features semantically. The proposed autoencoder takes a grayscale input image and generates a color image that segments the defect region in red, the metallic interdigital transducer (IDT) fingers in green, and the substrate region in blue. Experimental results demonstrate promising segmentation scores in locating the defects and regions of interest for a novel SAW sensor variant. The proposed method can automate the localization and measurement of post-fabrication defects at the pixel level that error-prone visual inspection may miss.
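The pixel-level measurement step can be sketched as post-processing of the autoencoder's color output: each pixel is assigned to the dominant channel (red = defect, green = IDT finger, blue = substrate), and per-class areas are tallied. The function and class names are our illustration, assuming the channel coding described above:

```python
import numpy as np

# Channel order assumed from the color coding: red, green, blue.
CLASSES = ("defect", "idt_finger", "substrate")

def quantify_segmentation(rgb_pred):
    """Convert a 3-channel segmentation map into per-class pixel areas.

    rgb_pred: float array of shape (H, W, 3) as produced by the decoder.
    Returns the percentage of image area covered by each class.
    """
    labels = np.argmax(rgb_pred, axis=-1)  # winning channel per pixel
    total = labels.size
    return {cls: (labels == i).sum() / total * 100
            for i, cls in enumerate(CLASSES)}
```

This kind of tally turns the segmentation into a quantitative defect measure (e.g., percent of sensor area flagged as defective) rather than a purely visual overlay.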
The DARPA POSH program resonates with the research community in identifying that engineering productivity has fallen behind Moore's law, resulting in prohibitive increases in IC design cost at leading technology nodes. The primary reason is that completing a design implementation requires extensive computing resources, expensive tools, and often many days. Worse, at the end of this process some designs fail to meet their design constraints and prove unroutable, forcing designers to re-run the entire flow after modification and creating a vicious design cycle. This research applies a machine learning approach to automatically identify design constraints and design rule checking (DRC) violation issues, helping the designer select design constraints with optimal DRC outcomes before the long detailed-routing step through an iterative greedy search. The proposed algorithm achieves up to 99.99% design-constraint prediction accuracy and reduces DRC violations by 98.4% with only a 6.9% area penalty.
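The iterative greedy search can be pictured as relaxing one constraint knob until the trained model predicts a clean route. In the sketch below, `predict_drc` is a stand-in for the learned predictor and placement density is our illustrative choice of knob (relaxing it costs area, matching the reported area-vs-DRC trade-off); none of this is the paper's exact formulation:

```python
def search_density(predict_drc, start=0.95, step=0.05, floor=0.50):
    """Greedily relax placement density until predicted DRC violations
    reach zero, trading a small area penalty for routability.

    predict_drc: callable mapping a density setting to an estimated
    DRC violation count (a stand-in for the trained ML model).
    """
    density = start
    # Step the constraint down until the model predicts zero violations
    # or we hit the safety floor.
    while density > floor and predict_drc(density) > 0:
        density = round(density - step, 2)
    return density
```

Because the predictor is consulted instead of running detailed routing, each greedy step costs a model inference rather than hours of EDA runtime, which is where the flow recovers most of its productivity.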